Diptera wing classification using Topological Data Analysis

Authors
Guilherme Vituri F. Pinto (Universidade Estadual Paulista)
Sergio Ura
Northon

Published
February 26, 2026

Abstract

We apply tools from Topological Data Analysis (TDA) to classify Diptera families based on wing venation patterns. Using three complementary filtration strategies — Vietoris-Rips on point clouds, radial filtrations, and directional (height) filtrations on wing images — we extract H0 and H1 topological features via extended summary statistics and compare classifiers via leave-one-out cross-validation. We focus on interpretable models (LDA, Decision Trees) to identify explainable topological criteria that distinguish families.

Keywords

Topological Data Analysis, Persistent homology, Diptera classification, Wing venation

1 Introduction

The order Diptera (true flies) comprises over 150,000 described species across more than 150 families. Wing venation patterns are a classical diagnostic character in Diptera systematics: the arrangement, branching and connectivity of veins vary markedly across families and provide a natural morphological signature.

In this work, we apply Topological Data Analysis (TDA) to the problem of classifying Diptera families from wing images. TDA provides a framework for extracting shape descriptors that are robust to continuous deformations — exactly the kind of invariance desirable when comparing biological structures that vary in scale, orientation and minor deformations across individuals.

We employ three complementary filtration strategies:

  1. Vietoris-Rips filtration on point-cloud samples of wing silhouettes — captures global loop structure
  2. Radial filtration from the wing centroid to the periphery — captures how vein topology is organized from center outward
  3. Directional (height) filtration in four sweep directions — captures how vein connectivity changes along different axes

For each filtration, we compute the homology dimension that carries signal: H1 (loops / enclosed cells) for Vietoris-Rips, and H0 (connected components / vein branching) for the radial and directional filtrations. We then extract extended summary statistics (17 interpretable features per diagram after dropping skewness and kurtosis) and classify using simple, explainable models (LDA, Decision Trees, Random Forests). The goal is to find interpretable topological criteria for family identification.

Why these three filtrations?

We initially tested five filtration strategies, including directional height filtrations in 8 directions, the Euclidean Distance Transform (EDT), and grayscale cubical filtrations. However: (a) directional filtrations are noise-sensitive — in images with isolated pixels and incomplete vein segmentation, each additional sweep direction creates spurious topological features, so we retain only H0 in four directions; (b) EDT produces trivial persistence on binarized images where veins are ~1 pixel wide; (c) cubical (grayscale) filtrations are meaningless on already-binarized black-and-white images. See NOTES.md for details on the discarded methods.

2 Methods

2.1 Data loading and preprocessing

All images are in the images/processed directory. For each image, we load it, apply a Gaussian blur (to close small gaps in the wing membrane and keep it connected), crop to the bounding box, and resize to a height of 150 pixels.
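
A minimal sketch of this pipeline in Julia. The helper name, blur width, and binarization threshold are illustrative assumptions, not the project's exact code:

using Images, FileIO

# Hypothetical preprocessing sketch: load, blur, binarize, crop, resize.
function preprocess_wing(path; target_height = 150, σ = 2.0, thresh = 0.5)
    img = Gray.(load(path))                      # grayscale intensities in [0, 1]
    blurred = imfilter(img, Kernel.gaussian(σ))  # close small gaps in the membrane
    bw = blurred .< thresh                       # dark veins become foreground
    rows, cols = any(bw; dims = 2)[:], any(bw; dims = 1)[:]
    cropped = bw[findfirst(rows):findlast(rows), findfirst(cols):findlast(cols)]
    resized = imresize(Float64.(cropped); ratio = target_height / size(cropped, 1))
    return resized .> 0.5                        # re-binarize after interpolation
end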

Total images after deduplication: 70
Families: ["Asilidae", "Bibionidae", "Ceratopogonidae", "Chironomidae", "Rhagionidae", "Sciaridae", "Simuliidae", "Tabanidae", "Tipulidae"]

Samples per family:
  Asilidae: 8
  Bibionidae: 6
  Ceratopogonidae: 8
  Chironomidae: 8
  Rhagionidae: 4
  Sciaridae: 6
  Simuliidae: 7
  Tabanidae: 11
  Tipulidae: 12

2.1.1 Excluding small families

Families with fewer than 3 samples (e.g. Pelecorhynchidae with \(n=2\)) can distort cross-validation results—a single misclassification changes accuracy by 50%. We provide a filtered version and run the analysis both ways.


Filtered dataset: 70 samples, 9 families

2.2 Example: forcing connectivity on 5 wings

The chunk below selects 5 wings (prioritizing those with the largest number of disconnected components before correction), then compares the binary pixel set before and after connect_pixel_components.
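
A sketch of that chunk. Here connect_pixel_components is the project's own helper (not reproduced), label_components comes from Images.jl, and samples is assumed to be a collection of (name, binary image) pairs:

using Images, DataFrames

n_components(bw) = maximum(label_components(bw))   # number of connected pixel regions

rows = map(samples) do (name, bw)
    fixed = connect_pixel_components(bw)           # project helper: bridge small gaps
    (sample = name,
     n_components_before = n_components(bw),
     n_components_after  = n_components(fixed),
     n_pixels_before     = count(bw),
     n_pixels_after      = count(fixed))
end
df = first(sort(DataFrame(rows), :n_components_before; rev = true), 5)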

5×5 DataFrame
 Row  sample           n_components_before  n_components_after  n_pixels_before  n_pixels_after
   1  simulidae 27                     101                   1             9233            9568
   2  biobionidae 9                     88                   1             9425            9605
   3  simulidae 26                      80                   1            11556           11778
   4  chironomidae 19                   75                   1            11504           11763
   5  simulidae 24                      71                   1             9019            9189

3 Topological feature extraction

We compute persistent homology using three filtration strategies. For the Vietoris-Rips filtration on connected point clouds, H0 is uninformative (single infinite bar), so we use only H1. For the radial filtration (computed via sublevel-set persistence on the pixel grid), we use only H0, which captures when disconnected vein segments merge as the radial sweep grows outward — directly encoding vein count and branching patterns. Radial H1 is omitted because pixelated binary images produce very few clean loops under this filtration. We also compute directional (height) H0 persistence in four sweep directions, capturing how vein connectivity changes along different axes.

What is persistent homology?

Persistent homology is the main tool of TDA. Given a shape or dataset, it tracks how topological features — connected components (dimension 0), loops (dimension 1), voids (dimension 2), etc. — appear and disappear as we “grow” the shape through a filtration parameter. Each feature has a birth time (when it appears) and a death time (when it gets filled in). The collection of all (birth, death) pairs is called a persistence diagram. Features with long lifetimes (high persistence = death \(-\) birth) represent genuine topological structure, while short-lived features are typically noise.

3.1 Strategy 1: Vietoris-Rips filtration on point clouds

Vietoris-Rips filtration

Given a set of points in \(\mathbb{R}^n\), the Vietoris-Rips complex at scale \(\varepsilon\) connects any subset of points that are pairwise within distance \(\varepsilon\). As \(\varepsilon\) increases from 0, we obtain a nested sequence of simplicial complexes — the Rips filtration. This is the most common filtration in TDA for point-cloud data. It is computationally expensive (since it must consider all pairwise distances), which is why we subsample the point clouds.

We sample 750 points from each wing silhouette using farthest-point sampling (which ensures good coverage of the shape), then compute 1-dimensional Rips persistence:

70-element Vector{Matrix{Float64}} (one matrix per wing; truncated numeric display omitted)
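
For reference, a minimal sketch of the sampling-plus-persistence step, assuming Ripserer.jl and a hypothetical pixel_coords helper that returns foreground pixel coordinates as tuples of floats:

using Ripserer

# Greedy farthest-point sampling: each new point maximizes its distance to the
# points already chosen, giving even coverage of the silhouette.
function farthest_points(pts, n)
    chosen = [first(pts)]
    d2 = [sum(abs2, p .- chosen[1]) for p in pts]    # squared distances to the sample
    while length(chosen) < n
        i = argmax(d2)
        push!(chosen, pts[i])
        for (j, p) in pairs(pts)
            d2[j] = min(d2[j], sum(abs2, p .- pts[i]))
        end
    end
    return chosen
end

cloud = farthest_points(pixel_coords(bw), 750)   # pixel_coords: assumed helper
rips_h1 = ripserer(cloud; dim_max = 1)[2]        # returns [H0, H1]; keep H1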

3.2 Strategy 2: Radial filtration

Radial filtration

The radial filtration assigns each foreground pixel a value equal to its distance from the centroid of the wing. Sublevel-set persistence on this function captures how topological features are distributed from the center of the wing outward. This is complementary to the Rips filtration, which captures global loop structure without spatial information.

We compute H0 persistence for the radial filtration, capturing how disconnected vein segments merge as the radial sweep grows outward.
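
A sketch of this construction, assuming Ripserer.jl's Cubical filtration for the sublevel-set computation:

using Ripserer, Statistics

# Assign each foreground pixel its distance to the centroid; background pixels
# get a value larger than any radius, so they enter the filtration last.
function radial_filtration(bw::AbstractMatrix{Bool})
    fg = findall(bw)
    c = (mean(i -> i[1], fg), mean(i -> i[2], fg))    # wing centroid
    f = fill(float(hypot(size(bw)...)), size(bw))
    for i in fg
        f[i] = hypot(i[1] - c[1], i[2] - c[2])
    end
    return f
end

radial_h0 = ripserer(Cubical(radial_filtration(bw)))[1]   # H0 of the sublevel sets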

3.3 Strategy 3: Directional (height) filtration

Directional filtration

The directional (height) filtration sweeps a hyperplane across the wing along a given direction. Each foreground pixel is assigned a value equal to its projection onto the sweep direction. As the sweep progresses, disconnected vein segments merge — captured by H0 persistence. We use four directions: horizontal [1,0], vertical [0,1], and both diagonals [1,1] and [1,-1].
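
The analogous sketch for one sweep direction, reusing the same Cubical machinery; sweep times are normalized to [0, 1], which matches the feature scales that appear in the decision tree later:

# Project each foreground pixel onto the sweep direction; background enters last.
function directional_filtration(bw::AbstractMatrix{Bool}, dir)
    fg = findall(bw)
    vals = [i[1] * dir[1] + i[2] * dir[2] for i in fg]
    lo, hi = extrema(vals)
    f = fill(2.0, size(bw))                  # above the normalized sweep range
    for (i, v) in zip(fg, vals)
        f[i] = (v - lo) / (hi - lo)          # normalized sweep time in [0, 1]
    end
    return f
end

dir_h0 = [ripserer(Cubical(directional_filtration(bw, d)))[1]
          for d in ((1, 0), (0, 1), (1, 1), (1, -1))]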

3.4 Visualizing the radial filtration

The radial filtration assigns each foreground pixel a value proportional to its distance from the wing centroid. Below we visualize the radial filtration arrays and the resulting H0 persistence diagrams for one wing per family:

3.5 Examples: persistence diagrams from each strategy

Below we show persistence diagrams from the Rips, radial, and directional filtrations for one specimen per family:

3.6 Extended summary statistics

We extract extended summary statistics from each persistence diagram using pd_statistics_extended, then remove skewness and kurtosis (near-zero relevance in tree-based models):

  • Count of intervals, max/total/total² persistence
  • Quantiles (10th, 25th, 50th, 75th, 90th)
  • Entropy, std of persistence
  • Median birth, median death, std birth, std death
  • Mean midlife = mean of (birth + death)/2
  • Persistence range = max - min persistence
Dropped stats: kurtosis, skewness
Statistics per diagram retained: 17 features
  Rips H1: (70, 17)
  Radial H0: (70, 17)
  Dir H0 (horiz): (70, 17)
  Dir H0 (vert): (70, 17)
  Dir H0 (diag1): (70, 17)
  Dir H0 (diag2): (70, 17)
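
A sketch of the 17 retained statistics, assuming a diagram is given as a vector of (birth, death) pairs with finite deaths and strictly positive lifetimes (the project's pd_statistics_extended additionally computes the dropped skewness and kurtosis):

using Statistics

function pd_stats(diagram)
    births = [b for (b, _) in diagram]
    deaths = [d for (_, d) in diagram]
    pers = deaths .- births
    p = pers ./ sum(pers)                   # lifetime distribution for the entropy
    (count = length(pers),
     max_pers = maximum(pers), total_pers = sum(pers), total_pers2 = sum(abs2, pers),
     q10 = quantile(pers, 0.1), q25 = quantile(pers, 0.25), median = median(pers),
     q75 = quantile(pers, 0.75), q90 = quantile(pers, 0.9),
     entropy = -sum(p .* log.(p)),          # persistence entropy
     std_pers = std(pers),
     median_birth = median(births), median_death = median(deaths),
     std_birth = std(births), std_death = std(deaths),
     mean_midlife = mean((births .+ deaths) ./ 2),
     pers_range = maximum(pers) - minimum(pers))
end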

3.6.1 Statistics comparison by family

4 Classification

We build the feature matrix from the extended summary statistics of all filtrations:

Feature matrix: (70, 102)
  Retained stats per diagram: 17
  102 features × 70 samples
  Feature-to-sample ratio: 1.46
Leave-one-out cross-validation (LOOCV)

With only 70 samples, we use leave-one-out cross-validation: for each sample, the classifier is trained on all other samples and tested on the held-out one. LOOCV has low bias (nearly the entire dataset is used for training) and is the standard validation strategy for small datasets.
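
The LOOCV loop itself is only a few lines; this sketch uses a DecisionTree.jl tree as the classifier (build_tree's positional arguments are n_subfeatures, max_depth, min_samples_leaf):

using DecisionTree

function loocv_accuracy(X, y; max_depth = 8, min_leaf = 2)
    correct = 0
    for i in eachindex(y)
        train = setdiff(eachindex(y), i)     # hold out sample i
        tree = build_tree(y[train], X[train, :], 0, max_depth, min_leaf)
        correct += apply_tree(tree, X[i, :]) == y[i]
    end
    return correct / length(y)
end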

4.1 Decision tree

We use a single decision tree as our most interpretable classifier. The tree structure itself provides readable classification rules:

10×6 DataFrame
 Row  max_depth  min_samples_leaf  n_correct  accuracy  balanced_accuracy  macro_f1
   1          8                 2         45  0.642857           0.641655  0.636277
   2          8                 3         44  0.628571           0.627766  0.619195
   3          8                 1         43  0.614286           0.613877  0.603163
   4          6                 2         41  0.585714           0.535173  0.50649
   5          6                 1         39  0.557143           0.507395  0.484324
   6          6                 3         39  0.557143           0.507395  0.484324
   7          5                 1         35  0.5                0.428692  0.427947
   8          5                 2         35  0.5                0.428692  0.427947
   9          5                 3         35  0.5                0.428692  0.427947
  10          4                 1         26  0.371429           0.303692  0.294893
Best Decision Tree LOOCV: 45/70 (64.3%)
  max_depth=8, min_samples_leaf=2
  Features considered at each split: 102 (all)
  Features with nonzero importance: 12 / 102
Balanced accuracy: 64.2%
12×2 DataFrame
 Row  feature                     importance
   1  Radial_H0__q75               0.145263
   2  Dir_H0_diag1__median_birth   0.142809
   3  Radial_H0__median            0.123341
   4  Dir_H0_diag2__q10            0.111196
   5  Dir_H0_horiz__q25            0.109666
   6  Dir_H0_horiz__median_death   0.103821
   7  Rips_H1__entropy             0.0918133
   8  Dir_H0_diag2__total_pers     0.0822494
   9  Dir_H0_diag1__q25            0.0490946
  10  Radial_H0__std_pers          0.0146067
  11  Dir_H0_diag1__std_birth      0.0133894
  12  Dir_H0_vert__total_pers      0.0127518
A deeper tree grown for display (max_depth=10) uses 14 / 102 features with nonzero importance.

Decision tree structure (LOOCV best, max_depth=8):
Feature 25: "Radial_H0__q75" < 0.001729 ?
├─ Feature 28: "Radial_H0__std_pers" < 0.151 ?
    ├─ Tipulidae : 9/9
    └─ Ceratopogonidae : 1/2
└─ Feature 80: "Dir_H0_diag1__median_birth" < 0.6497 ?
    ├─ Feature 24: "Radial_H0__median" < 0.003027 ?
        ├─ Feature 47: "Dir_H0_horiz__median_death" < 0.104 ?
            ├─ Feature 82: "Dir_H0_diag1__std_birth" < 0.3258 ?
                ├─ Asilidae : 1/2
                └─ Asilidae : 6/6
            └─ Feature 90: "Dir_H0_diag2__q10" < 0.001965 ?
                ├─ Chironomidae : 7/7
                └─ Feature 10: "Rips_H1__entropy" < 2.005 ?
                    ├─ Feature 40: "Dir_H0_horiz__q25" < 0.006711 ?
                        ├─ Feature 54: "Dir_H0_vert__total_pers" < 38.16 ?
                            ├─ Sciaridae : 5/5
                            └─ Sciaridae : 1/2
                        └─ Ceratopogonidae : 7/7
                    └─ Feature 88: "Dir_H0_diag2__total_pers" < 22.53 ?
                        ├─ Feature 74: "Dir_H0_diag1__q25" < 0.002146 ?
                            ├─ Tipulidae : 2/4
                            └─ Rhagionidae : 4/4
                        └─ Bibionidae : 6/6
        └─ Simuliidae : 7/7
    └─ Tabanidae : 9/9

4.2 LDA (Linear Discriminant Analysis)

Linear Discriminant Analysis (LDA)

LDA finds a linear projection of the feature space that maximizes the ratio of between-class variance to within-class variance. The projected data is then classified with a simple 1-NN rule. LDA is a classical method that works well when classes are approximately Gaussian and the number of features is not too large relative to the number of samples.
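
A from-scratch sketch of the projection (within- and between-class scatter, then the leading generalized eigenvectors); with 9 families the discriminant space has at most 8 dimensions. In practice a library such as MultivariateStats.jl would be used:

using LinearAlgebra, Statistics

function lda_projection(X, y; outdim = 8)
    μ, d = mean(X; dims = 1), size(X, 2)
    Sw, Sb = zeros(d, d), zeros(d, d)
    for c in unique(y)
        Xc = X[y .== c, :]
        μc = mean(Xc; dims = 1)
        Sw += (Xc .- μc)' * (Xc .- μc)               # within-class scatter
        Sb += size(Xc, 1) * (μc - μ)' * (μc - μ)     # between-class scatter
    end
    # leading eigenvectors of Sw⁻¹Sb span the discriminant directions;
    # a small ridge keeps Sw invertible when p is large relative to n
    E = eigen((Sw + 1e-6I) \ Sb)
    order = sortperm(real.(E.values); rev = true)[1:outdim]
    return real.(E.vectors[:, order])
end

The projected features X * W are then classified with the 1-NN rule described above.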

LDA LOOCV: 42/70 (60.0%)
Balanced accuracy: 55.9%
Macro-F1: 54.7%

4.3 Balanced Random Forest

Random Forest

A Random Forest is an ensemble of decision trees, each trained on a bootstrap sample of the data using a random subset of features. The final prediction is the majority vote across all trees. Balanced Random Forests oversample minority classes (or weight them inversely to their frequency) so that rare families are not drowned out by common ones — important here because Tipulidae has 12 samples while the smallest retained family (Rhagionidae) has only 4. Random Forests are robust to overfitting, handle high-dimensional features well, and provide built-in feature importance estimates.
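
A sketch of the balancing step with DecisionTree.jl: oversample every family (with replacement) to the size of the largest one, then train an ordinary forest. build_forest's positional arguments here are n_subfeatures, n_trees, partial_sampling, max_depth:

using DecisionTree, Random

function balanced_indices(y; rng = Random.default_rng())
    nmax = maximum(count(==(c), y) for c in unique(y))
    return reduce(vcat, [rand(rng, findall(==(c), y), nmax) for c in unique(y)])
end

idx = balanced_indices(y)
forest = build_forest(y[idx], X[idx, :], 10, 200, 0.7, -1)   # √102 ≈ 10 features/split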

8×7 DataFrame
 Row  n_trees  max_depth  min_leaf  n_correct  accuracy  balanced_accuracy  macro_f1
   1      200         -1         1         55  0.785714           0.776936  0.771556
   2      200          8         1         55  0.785714           0.776936  0.771556
   3      200         12         1         55  0.785714           0.776936  0.771556
   4      200         -1         2         52  0.742857           0.748316  0.738452
   5      200          8         2         52  0.742857           0.748316  0.738452
   6      200         12         2         52  0.742857           0.748316  0.738452
   7     1000         -1         1         52  0.742857           0.729798  0.715404
   8     1000         -1         2         52  0.742857           0.724327  0.716631
Best Balanced RF LOOCV: 55/70 (78.6%)
  n_trees=200, max_depth=-1 (unlimited), min_leaf=1
  n_features per split: 10 (√102)
Balanced accuracy: 77.7%
Macro-F1: 77.2%

4.4 SVM

Support Vector Machine (SVM)

An SVM finds the hyperplane that maximizes the margin between classes; a regularization parameter \(C\) penalizes misclassification: small \(C\) allows wider margins with more misclassifications, while large \(C\) enforces tight boundaries. The RBF (Radial Basis Function) kernel maps data into a high-dimensional space where linear separation becomes possible. The linear kernel finds a separating hyperplane directly in the original feature space and is less prone to overfitting when \(p \gg n\).
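
A sketch of one fold with LIBSVM.jl, which expects observations in columns; the train/test index vectors come from the LOOCV loop:

using LIBSVM

Xt = permutedims(X)                       # 102 × 70: one column per wing
model = svmtrain(Xt[:, train], y[train];
                 kernel = Kernel.Linear, cost = 1.0)
ŷ, _ = svmpredict(model, Xt[:, test])     # predicted labels and decision values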

8×6 DataFrame
 Row  method                 n_correct  n_total  accuracy  balanced_accuracy  macro_f1
   1  SVM (Linear, C=1.0)           57       70  0.814286           0.776936  0.776793
   2  SVM (Linear, C=10.0)          57       70  0.814286           0.776936  0.776793
   3  SVM (Linear, C=100.0)         57       70  0.814286           0.776936  0.776793
   4  SVM (Linear, C=0.1)           56       70  0.8                0.763047  0.762963
   5  SVM (RBF, C=10.0)             53       70  0.757143           0.728656  0.725312
   6  SVM (RBF, C=100.0)            53       70  0.757143           0.728656  0.725312
   7  SVM (RBF, C=1.0)              42       70  0.6                0.551046  0.547948
   8  SVM (Linear, C=0.01)          42       70  0.6                0.535354  0.504925
Best SVM (selected on the same LOOCV table; optimistic): SVM (Linear, C=1.0)
  57/70 (81.4%)
  Balanced accuracy: 77.7%
  Macro-F1: 77.7%

SVM Nested LOOCV (honest estimate):
  56/70 (80.0%)
  Balanced accuracy: 76.3%
  Macro-F1: 76.3%
Hyperparameters selected by the inner CV across the 70 outer folds:

3×3 DataFrame
 Row  kernel  cost   n_outer_folds
   1  Linear    0.1             42
   2  Linear    1.0             26
   3  RBF      10.0              2

4.5 SVM with PCA dimensionality reduction

To reduce overfitting risk, we project features with PCA inside each LOOCV fold, then train SVM on the reduced space. We scan different variance-retention targets and kernels/costs to identify a good dimensionality.
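
A sketch of the per-fold projection, assuming MultivariateStats.jl (fit(PCA, ...; pratio) keeps enough components to reach the variance target; predict projects new data). Both the z-scoring and the PCA are fit on the training fold only, so the held-out sample never leaks into the projection:

using MultivariateStats, Statistics

function pca_fold(Xtr, Xte; pratio = 0.98)
    μ = mean(Xtr; dims = 1)
    σ = std(Xtr; dims = 1) .+ eps()                  # guard against zero variance
    Ztr, Zte = (Xtr .- μ) ./ σ, (Xte .- μ) ./ σ      # z-score with training stats only
    M = fit(PCA, permutedims(Ztr); pratio)           # MultivariateStats wants d × n
    return permutedims(predict(M, permutedims(Ztr))),
           permutedims(predict(M, permutedims(Zte)))
end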

10×9 DataFrame
 Row  variance_ratio  median_n_components  kernel  cost   n_correct  n_total  accuracy  balanced_accuracy  macro_f1
   1            0.98                   25  Linear    1.0         55       70  0.785714           0.752946  0.749646
   2            0.98                   25  Linear   10.0         55       70  0.785714           0.752946  0.749646
   3            0.98                   25  Linear    0.1         55       70  0.785714           0.749158  0.745157
   4            0.95                   17  Linear    1.0         55       70  0.785714           0.744529  0.743955
   5            0.95                   17  Linear   10.0         55       70  0.785714           0.744529  0.743955
   6            0.95                   17  Linear    0.1         53       70  0.757143           0.716751  0.717378
   7            0.9                    11  Linear    0.1         52       70  0.742857           0.707492  0.700383
   8            0.9                    11  Linear    1.0         50       70  0.714286           0.692761  0.689257
   9            0.9                    11  Linear   10.0         50       70  0.714286           0.692761  0.689257
  10            0.8                     6  Linear    1.0         48       70  0.685714           0.67803   0.676142
Selected SVM+PCA setting (LOOCV table):
  kernel=Linear, C=1.0
  variance ratio=98.0%
  median retained dimensions=25
  accuracy=55/70 (78.6%)
  balanced accuracy=75.3%
  macro-F1=75.0%

SVM+PCA Nested LOOCV:
  54/70 (77.1%)
  Balanced accuracy: 73.9%
  Macro-F1: 73.6%

4.6 SVM using only Vietoris-Rips H1 statistics

This analysis keeps only features from the 1D Vietoris-Rips barcode (stats_rips) and tests whether loop-based global topology alone is sufficient for classification.

8×6 DataFrame
 Row  method                          n_correct  n_total  accuracy  balanced_accuracy  macro_f1
   1  SVM Rips H1 (RBF, C=10.0)              44       70  0.628571           0.566619  0.565316
   2  SVM Rips H1 (RBF, C=100.0)             44       70  0.628571           0.574194  0.571167
   3  SVM Rips H1 (RBF, C=1.0)               43       70  0.614286           0.525794  0.489005
   4  SVM Rips H1 (Linear, C=0.1)            42       70  0.6                0.517196  0.486484
   5  SVM Rips H1 (Linear, C=1.0)            41       70  0.585714           0.529401  0.518325
   6  SVM Rips H1 (Linear, C=10.0)           40       70  0.571429           0.508237  0.495523
   7  SVM Rips H1 (Linear, C=100.0)          40       70  0.571429           0.508237  0.495523
   8  SVM Rips H1 (Linear, C=0.01)           26       70  0.371429           0.297559  0.212996
Best SVM with Rips-only stats:
  SVM Rips H1 (RBF, C=10.0)
  44/70 (62.9%)
  Balanced accuracy: 56.7%
  Macro-F1: 56.5%

SVM Rips-only Nested LOOCV:
  41/70 (58.6%)
  Balanced accuracy: 52.1%
  Macro-F1: 51.3%

4.7 k-NN on Rips Wasserstein distance

Wasserstein distance between persistence diagrams

The Wasserstein distance \(W_q\) between two persistence diagrams is the cost of the optimal matching between their points (including matching points to the diagonal, representing trivial features). With \(q=1\) it equals the Earth Mover’s Distance; with \(q=2\) it penalizes large mismatches more heavily.

k-Nearest Neighbors (k-NN) classifies a query by majority vote among its \(k\) nearest neighbors in the distance matrix. With \(k=1\), this is the simplest possible classifier — completely hyperparameter-free — and serves as a useful baseline.

As a complementary approach, we compute pairwise Wasserstein distances between the Rips H1 persistence diagrams and classify with k-NN. Unlike the feature-based classifiers above, this operates directly on the persistence diagrams without extracting summary statistics, and is therefore less susceptible to information loss during featurization.
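
A sketch of this classifier, assuming PersistenceDiagrams.jl, whose Wasserstein(q) objects are callable on pairs of diagrams; diagrams is the vector of Rips H1 diagrams:

using PersistenceDiagrams, StatsBase

# Pairwise W1 distances between diagrams (symmetric, zero diagonal).
W1 = [i == j ? 0.0 : Wasserstein(1)(diagrams[i], diagrams[j])
      for i in eachindex(diagrams), j in eachindex(diagrams)]

# Classify diagram i by majority vote among its k nearest neighbors (excluding i).
function knn_predict(D, y, i; k = 3)
    neighbors = filter(!=(i), sortperm(D[i, :]))[1:k]
    return mode(y[neighbors])
end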

Rips W1 non-finite entries (raw): 0
Rips W2 non-finite entries (raw): 0
6×6 DataFrame
 Row  method            n_correct  n_total  accuracy  balanced_accuracy  macro_f1
   1  3-NN Rips Wass-1         48       70  0.685714           0.644481  0.635219
   2  5-NN Rips Wass-1         48       70  0.685714           0.624158  0.588523
   3  1-NN Rips Wass-1         47       70  0.671429           0.639851  0.633751
   4  5-NN Rips Wass-2         47       70  0.671429           0.617544  0.596351
   5  1-NN Rips Wass-2         45       70  0.642857           0.601972  0.600092
   6  3-NN Rips Wass-2         41       70  0.585714           0.545575  0.525557

4.8 Summary of all classifiers

8×5 DataFrame
 Row  method                        n_correct  n_total  accuracy  balanced_accuracy
   1  SVM (Nested LOOCV)                   56       70  0.8                0.763047
   2  Balanced RF (T=200)                  55       70  0.785714           0.776936
   3  SVM + PCA (Nested LOOCV)             54       70  0.771429           0.739057
   4  3-NN Rips Wass-1                     48       70  0.685714           0.644481
   5  5-NN Rips Wass-2                     47       70  0.671429           0.617544
   6  Decision Tree (d=8)                  45       70  0.642857           0.641655
   7  LDA                                  42       70  0.6                0.558622
   8  SVM Rips-only (Nested LOOCV)         41       70  0.585714           0.520984

5 Top-2 accuracy

A single-label classifier must pick exactly one family. Here we relax this to top-2: the model returns the two most probable families and the prediction is considered correct if the true family is among them. This is a useful diagnostic for borderline cases where two families look very similar.

We re-run LOOCV for the best balanced RF, collecting class probabilities via apply_forest_proba.
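
The top-2 rule itself is short; apply_forest_proba returns one row of class probabilities per sample, with columns ordered as in the classes argument:

using DecisionTree

classes = sort(unique(y))
probs = apply_forest_proba(forest, X[test, :], classes)   # n_test × n_classes
top2 = [classes[sortperm(row; rev = true)[1:2]] for row in eachrow(probs)]
top2_hits = count(y[test][i] in top2[i] for i in eachindex(top2))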

Top-1 accuracy : 55/70 (78.6%)
Top-2 accuracy : 62/70 (88.6%)
Gain from top-2: +7 correctly recovered
9×6 DataFrame
 Row  family           n_total  top1_correct  top2_correct  top1_acc_pct  top2_acc_pct
   1  Asilidae               8             3             6          37.5          75.0
   2  Ceratopogonidae        8             5             5          62.5          62.5
   3  Chironomidae           8             6             7          75.0          87.5
   4  Rhagionidae            4             3             3          75.0          75.0
   5  Bibionidae             6             5             6          83.3         100.0
   6  Sciaridae              6             5             6          83.3         100.0
   7  Tabanidae             11            10            11          90.9         100.0
   8  Tipulidae             12            11            11          91.7          91.7
   9  Simuliidae             7             7             7         100.0         100.0

6 Which features drive the classification?

The table below ranks the top 20 features by importance, normalized so that the top feature equals 1.0:

20×2 DataFrame
 Row  feature                     importance
   1  Radial_H0__q75               1.0
   2  Rips_H1__entropy             0.982125
   3  Rips_H1__total_pers          0.897672
   4  Dir_H0_diag2__median         0.861033
   5  Rips_H1__median              0.811281
   6  Rips_H1__q90                 0.801856
   7  Dir_H0_diag1__median_birth   0.798053
   8  Radial_H0__std_birth         0.795792
   9  Dir_H0_diag2__q10            0.792933
  10  Dir_H0_diag2__q25            0.745071
  11  Dir_H0_horiz__count          0.698
  12  Rips_H1__q75                 0.656553
  13  Rips_H1__total_pers2         0.654088
  14  Radial_H0__median            0.651447
  15  Dir_H0_diag1__q25            0.647644
  16  Dir_H0_diag2__q75            0.637396
  17  Dir_H0_diag1__q10            0.633339
  18  Rips_H1__max_pers            0.619473
  19  Dir_H0_diag1__median         0.609186
  20  Rips_H1__count               0.600122

7 Feature ablation

7×6 DataFrame
 Row  filtrations                      n_features  lda_correct  lda_accuracy (%)  rf_correct  rf_accuracy (%)
   1  Rips + Radial H0                         34           52              74.3          47             67.1
   2  All (Rips + Radial H0 + Dir H0)         102           42              60.0          50             71.4
   3  Rips H1 only                             17           41              58.6          44             62.9
   4  Radial H0 only                           17           41              58.6          41             58.6
   5  Rips + Dir H0                            85           38              54.3          50             71.4
   6  Dir H0 only (4 dirs)                     68           35              50.0          41             58.6
   7  Radial H0 + Dir H0                       85           34              48.6          46             65.7

8 Honest evaluation (Nested LOOCV)

Nested cross-validation

Standard LOOCV can give optimistically biased estimates when hyperparameters are tuned on the same data. Nested LOOCV adds an inner cross-validation loop: for each held-out test sample, the best hyperparameters are selected using only the training fold. This provides an unbiased estimate of generalization performance.
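
Schematically, with kfold_score and fit_predict as hypothetical stand-ins for any classifier's inner-CV scoring and final fit:

function nested_loocv(X, y, grid, fit_predict, kfold_score; inner_k = 4)
    correct = 0
    for i in eachindex(y)
        train = setdiff(eachindex(y), i)    # outer LOO split
        # the inner k-fold CV sees only the training fold, never sample i
        best = argmax(g -> kfold_score(X[train, :], y[train], g, inner_k), grid)
        ŷ = fit_predict(X[train, :], y[train], X[i:i, :], best)
        correct += first(ŷ) == y[i]
    end
    return correct / length(y)
end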

=== Nested LOOCV: Balanced RF ===
Features: 102 (Rips + radial + directional stats)
Accuracy: 53/70 (75.7%)
Balanced accuracy: 74.4%
Macro-F1: 74.0%
95% Wilson CI: [64.5%, 84.2%]
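
The Wilson interval has a short closed form; this sketch reproduces the reported [64.5%, 84.2%] for 53/70:

# Wilson score interval for a binomial proportion (z = 1.96 for 95%).
function wilson_ci(k, n; z = 1.96)
    p̂, denom = k / n, 1 + z^2 / n
    center = (p̂ + z^2 / (2n)) / denom
    half = (z / denom) * sqrt(p̂ * (1 - p̂) / n + z^2 / (4n^2))
    return (center - half, center + half)
end

wilson_ci(53, 70)    # ≈ (0.645, 0.842)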

8.1 Confusion matrix

Per-class accuracy (Nested LOOCV):
  Asilidae: 2/8 (25.0%)
  Bibionidae: 5/6 (83.3%)
  Ceratopogonidae: 7/8 (87.5%)
  Chironomidae: 4/8 (50.0%)
  Rhagionidae: 2/4 (50.0%)
  Sciaridae: 6/6 (100.0%)
  Simuliidae: 7/7 (100.0%)
  Tabanidae: 9/11 (81.8%)
  Tipulidae: 11/12 (91.7%)

8.2 Misclassified wings: visual inspection

For each misclassified specimen we show the wing image and radial filtration alongside the 4 nearest neighbors (by Euclidean distance in the z-scored feature space). This helps us understand whether errors are due to genuine visual ambiguity or to segmentation artifacts.

Misclassified samples (17 / 70):
  [1] A-11  true=Asilidae  predicted=Tabanidae
  [2] A-13  true=Asilidae  predicted=Tabanidae
  [3] A-17  true=Asilidae  predicted=Tipulidae
  [4] A-19  true=Asilidae  predicted=Tipulidae
  [6] A-3  true=Asilidae  predicted=Tabanidae
  [7] A-5  true=Asilidae  predicted=Tipulidae
  [9] T-19  true=Tipulidae  predicted=Tabanidae
  [26] B-9  true=Bibionidae  predicted=Simuliidae
  [34] C-45  true=Ceratopogonidae  predicted=Sciaridae
  [36] C-14  true=Chironomidae  predicted=Simuliidae
  [37] C-15  true=Chironomidae  predicted=Ceratopogonidae
  [40] C-18  true=Chironomidae  predicted=Ceratopogonidae
  [41] C-19  true=Chironomidae  predicted=Bibionidae
  [44] R-11  true=Rhagionidae  predicted=Tabanidae
  [46] R-9  true=Rhagionidae  predicted=Tabanidae
  [65] T-33  true=Tabanidae  predicted=Asilidae
  [69] T-37  true=Tabanidae  predicted=Asilidae
=== Best Method ===
SVM (Nested LOOCV): 56/70 (80.0%)
95% Wilson CI: [69.2%, 87.7%]

9 Discussion

We applied three TDA filtration strategies — Vietoris-Rips, radial, and directional — to classify Diptera families from wing venation images, extracting extended summary statistics per persistence diagram and removing low-relevance moments (skewness and kurtosis).

9.1 Key findings

  1. Three filtrations capture complementary information: The Vietoris-Rips filtration on point-cloud samples captures the global loop structure of the wing venation (number and prominence of wing cells). The radial H0 filtration captures the center-to-periphery organization: how vein segments merge as the filtration grows outward from the centroid. The directional H0 filtrations capture vein connectivity along four axes (horizontal, vertical, and both diagonals), encoding how disconnected components merge under directional sweeps.

  2. Pruned extended statistics remain sufficient: After removing skewness and kurtosis, we retain 17 statistics per diagram (count, max/total persistence, quantiles, entropy, median birth/death, etc.). With 6 diagrams × 17 features = 102 total features for 70 samples, the feature-to-sample ratio is ~1.46:1, reducing overfitting risk while preserving discriminative signal.

  3. Feature ablation reveals which filtrations matter: The ablation study compares Rips alone, radial H0 alone, directional H0 alone, and their combinations. This provides evidence about whether global topology (Rips), radial organization (radial H0), or directional sweep information (directional H0) is most discriminative.

  4. Why some filtrations were dropped:

    • Radial H1: On pixelated binary images, few genuine 1-cycles survive the radial filtration, making radial H1 mostly noise.
    • EDT (Euclidean Distance Transform): On binarized images, the EDT is trivially related to the binary structure, providing little additional information beyond what Rips already captures.
    • Cubical (grayscale sublevel-set): After binarization, the grayscale information is lost, so cubical persistence reduces to computing persistence on a binary image — equivalent to connected-component analysis.
  5. Nested LOOCV provides honest evaluation: Standard LOOCV can be optimistic when hyperparameters are tuned on the same data. Nested LOOCV (with 4-fold inner CV for hyperparameter selection) gives unbiased accuracy estimates.

  6. Statistical rigor: We report LOOCV accuracy with Wilson confidence intervals, and nested LOOCV for unbiased evaluation.

9.2 Limitations

  • Class imbalance: Tipulidae has 12 samples while the smallest retained family has only 4, which affects classifier performance.
  • Image quality and preprocessing parameters (blur, threshold) influence topological features.
  • With only 70 samples, confidence intervals remain wide regardless of method.
  • Wings are manually segmented and binarized; automated segmentation could introduce different error patterns.

9.3 Future work

  • Extend dataset with more specimens per family, especially underrepresented ones
  • Improve imaging/segmentation quality to reduce noise
  • Apply extended persistence or zigzag persistence for richer invariants
  • Investigate which specific topological features (e.g., how many loops, persistence of largest features) correspond to known vein characters in Diptera taxonomy
  • Try the analysis on non-binarized (grayscale) images, where EDT and cubical filtrations would be more informative
  • Explore directional H1 persistence on higher-quality images where loop detection under sweeps may be more reliable

Citation

BibTeX citation:
@online{vituri_f._pinto2026,
  author = {Vituri F. Pinto, Guilherme and Ura, Sergio and Northon},
  title = {Diptera Wing Classification Using {Topological} {Data}
    {Analysis}},
  date = {2026-02-26},
  langid = {en},
  abstract = {We apply tools from Topological Data Analysis (TDA) to
    classify Diptera families based on wing venation patterns. Using
    three complementary filtration strategies --- Vietoris-Rips on
    point clouds, radial filtrations, and directional (height)
    filtrations on wing images --- we extract H0 and H1 topological
    features via extended summary statistics and compare classifiers
    via leave-one-out cross-validation. We focus on interpretable
    models (LDA, Decision Trees) to identify explainable topological
    criteria that distinguish families.}
}
For attribution, please cite this work as:
Vituri F. Pinto, Guilherme, Sergio Ura, and Northon. 2026. “Diptera Wing Classification Using Topological Data Analysis.” Earth and Space Science. February 26, 2026.